64 research outputs found

    Derandomization with Minimal Memory Footprint

    Existing proofs that deduce BPL = L from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large constant overhead in space. We study space-bounded derandomization with minimal footprint, and ask what is the minimal possible space overhead for derandomization. We show that BPSPACE[S] ⊆ DSPACE[c · S] for c ≈ 2, assuming space-efficient cryptographic PRGs and either (1) lower bounds against bounded-space algorithms with advice, or (2) lower bounds against certain uniform compression algorithms. Under additional assumptions regarding the power of catalytic computation, in a new setting of parameters that was not studied before, we are even able to get c ≈ 1. Our results are constructive: given a candidate hard function (and a candidate cryptographic PRG), we show how to transform the randomized algorithm into an efficient deterministic one. This follows from new PRGs and targeted PRGs for space-bounded algorithms, which we combine with novel space-efficient evaluation methods. A central ingredient in all our constructions is hardness-amplification reductions in logspace-uniform TC^0, which were not known before.
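
    For context on the constructive claim, the generic template for PRG-based derandomization is to enumerate all seeds of the generator and take a majority vote. The sketch below shows only this textbook template, with hypothetical `prg` and `randomized_alg` callables; it deliberately ignores the space accounting that is the paper's actual contribution.

```python
def derandomize_with_prg(randomized_alg, prg, seed_length, x):
    """Generic seed-enumeration derandomization (not the paper's
    space-optimized construction): run the randomized algorithm on every
    pseudorandom string and output the majority answer."""
    total = 2 ** seed_length
    votes = sum(1 for seed in range(total) if randomized_alg(x, prg(seed)))
    return 2 * votes > total
```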

    Two-Source Condensers with Low Error and Small Entropy Gap via Entropy-Resilient Functions

    In their seminal work, Chattopadhyay and Zuckerman (STOC'16) constructed a two-source extractor with error epsilon for n-bit sources having min-entropy polylog(n/epsilon). Unfortunately, the construction's running time is poly(n/epsilon), which means that with polynomial-time constructions, only polynomially small errors are possible. Our main result is a poly(n, log(1/epsilon))-time computable two-source condenser. For any k >= polylog(n/epsilon), our condenser transforms two independent (n,k)-sources into a distribution over m = k - O(log(1/epsilon)) bits that is epsilon-close to having min-entropy m - o(log(1/epsilon)), hence achieving an entropy gap of o(log(1/epsilon)). The bottleneck for obtaining low error in recent constructions of two-source extractors lies in the use of resilient functions. Informally, a resilient function receives input bits from r players, with the property that its output has small bias even if a bounded number of corrupted players feed adversarial inputs after seeing the inputs of the other players. The drawback of using resilient functions is that the error cannot be smaller than ln r / r. This, in turn, forces the running time of the construction to be polynomial in 1/epsilon. A key component in our construction is a variant of resilient functions which we call entropy-resilient functions. This variant can be seen as playing the above game for several rounds, each round outputting one bit. The goal of the corrupted players is to reduce, with as high probability as they can, the min-entropy accumulated throughout the rounds. We show that while the bias decreases only polynomially with the number of players in a one-round game, their success probability decreases exponentially in the entropy gap they are attempting to incur in a repeated game.
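
    The round structure of this repeated game can be made concrete with a small simulation. The sketch below is our own illustration: the function `f`, the `adversary` interface, and the bit-string model are hypothetical simplifications of the paper's formal min-entropy definitions.

```python
import random

def repeated_game(f, r, rounds, corrupted, adversary):
    """Toy model of the repeated resilience game: honest players output
    fresh random bits each round, corrupted players answer after seeing
    them, and f produces one output bit per round.  The adversary's goal
    is to make the output string have low min-entropy."""
    outputs = []
    for _ in range(rounds):
        bits = {i: random.randrange(2) for i in range(r) if i not in corrupted}
        bits.update(adversary(dict(bits), list(outputs)))  # corrupted players' bits
        outputs.append(f([bits[i] for i in range(r)]))
    return outputs
```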

    Probabilistic Logarithmic-Space Algorithms for Laplacian Solvers

    A recent series of breakthroughs initiated by Spielman and Teng culminated in the construction of nearly-linear-time Laplacian solvers, approximating the solution of a linear system Lx = b, where L is the normalized Laplacian of an undirected graph. In this paper we study the space complexity of the problem. Surprisingly, we are able to give a probabilistic, logspace algorithm solving the problem. We further extend the algorithm to other families of graphs, such as Eulerian graphs (and directed regular graphs) and graphs that mix in polynomial time. Our approach is to pseudo-invert the Laplacian by first "peeling off" the problematic kernel of the operator, and then approximating the inverse of the remaining part using a Taylor series. We approximate the Taylor series using previous work and the special structure of the problem. For directed graphs, our analysis exploits the Jordan normal form and results on matrix functions.
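
    As a dense-linear-algebra illustration of the "peel off the kernel, then Taylor-expand" idea (our sketch, with none of the paper's space efficiency), one can approximately solve the normalized-Laplacian system of a connected undirected graph as follows.

```python
import numpy as np

def laplacian_solve(adj, b, terms=500):
    """Approximate a solution of L x = b for the normalized Laplacian
    L = I - N, N = D^{-1/2} A D^{-1/2}, of a connected undirected graph.
    Dense NumPy sketch only -- it ignores the paper's logspace concerns."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    N = d_inv_sqrt @ adj @ d_inv_sqrt
    lazy = 0.5 * (np.eye(len(deg)) + N)          # eigenvalues in [0, 1]

    # Peel off the kernel of L, spanned by v = D^{1/2} * 1 (normalized).
    v = np.sqrt(deg) / np.linalg.norm(np.sqrt(deg))
    b_perp = b - v * (v @ b)

    # On the complement of the kernel, L = 2(I - lazy) and
    # (I - lazy)^{-1} = sum_{i>=0} lazy^i, a truncated Taylor/Neumann series.
    x, term = np.zeros_like(b_perp), b_perp.copy()
    for _ in range(terms):
        x += term
        term = lazy @ term
    return 0.5 * x
```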

    Log-Seed Pseudorandom Generators via Iterated Restrictions


    Near-Optimal Pseudorandom Generators for Constant-Depth Read-Once Formulas

    We give an explicit pseudorandom generator (PRG) for read-once AC^0, i.e., constant-depth read-once formulas over the basis {∧, ∨, ¬} with unbounded fan-in. The seed length of our PRG is Õ(log(n/epsilon)). Previously, PRGs with near-optimal seed length were known only for the depth-2 case [Gopalan et al., 2012]. For constant depth d > 2, the best prior PRG is a recent construction by Forbes and Kelley with seed length Õ(log^2 n + log n · log(1/epsilon)) for the more general model of constant-width read-once branching programs with arbitrary variable order [Forbes and Kelley, 2018]. Looking beyond read-once AC^0, we also show that our PRG fools read-once AC^0[⊕] with seed length Õ(t + log(n/epsilon)), where t is the number of parity gates in the formula. Our construction follows Ajtai and Wigderson's approach of iterated pseudorandom restrictions [Ajtai and Wigderson, 1989]. We assume by recursion that we already have a PRG for depth-d AC^0 formulas. To fool depth-(d+1) AC^0 formulas, we use the given PRG, combined with a small-bias distribution and almost k-wise independence, to sample a pseudorandom restriction. The analysis of Forbes and Kelley [2018] shows that our restriction approximately preserves the expectation of the formula. The crux of our work is showing that after poly(log log n) independent applications of our pseudorandom restriction, the formula simplifies, in the sense that every gate other than the output has only polylog n remaining children. Finally, as the last step, we use a recent PRG of Meka, Reingold, and Tal [Meka et al., 2019] to fool this simpler formula.
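
    The recursive structure described above can be summarized in pseudocode. In the sketch below, `sample_restriction` and `base_prg` are hypothetical stand-ins for the paper's small-bias / almost k-wise restriction sampler and for the Meka-Reingold-Tal generator, respectively; only the control flow of the paradigm is shown.

```python
def iterated_restriction_prg(n, rounds, sample_restriction, base_prg):
    """Structural sketch of the Ajtai--Wigderson iterated-restriction
    paradigm: each round pseudorandomly fixes some of the live variables;
    once the formula has simplified, a final PRG fills in the rest."""
    assignment, live = [None] * n, set(range(n))
    for _ in range(rounds):                 # poly(log log n) rounds in the paper
        rho = sample_restriction(live)      # dict: var -> 0/1, or None to keep live
        for i, bit in rho.items():
            if bit is not None:
                assignment[i] = bit
                live.discard(i)
    for i, bit in zip(sorted(live), base_prg(len(live))):
        assignment[i] = bit                 # fill the surviving variables
    return assignment
```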

    On Hitting-Set Generators for Polynomials That Vanish Rarely

    The problem of constructing hitting-set generators for polynomials of low degree is fundamental in complexity theory and has numerous well-known applications. We study the following question, which is a relaxation of this problem: Is it easier to construct a hitting-set generator for polynomials p: 𝔽^n → 𝔽 of degree d if we are guaranteed that the polynomial vanishes on at most an ε > 0 fraction of its inputs? We are specifically interested in tiny values of ε ≪ d/|𝔽|. This question was first considered by Goldreich and Wigderson (STOC 2014), who studied a specific setting geared for a particular application, and another specific setting was later studied by the third author (CCC 2017). In this work our main interest is a systematic study of the relaxed problem, in its general form, and we prove results that significantly improve and extend the two previously known results. Our contributions are of two types:
    - Over fields of size 2 ≤ |𝔽| ≤ poly(n), we show that the seed length of any hitting-set generator for polynomials of degree d ≤ n^{.49} that vanish on at most an ε = |𝔽|^{-t} fraction of their inputs is at least Ω((d/t) · log(n)).
    - Over 𝔽₂, we show that there exists a (non-explicit) hitting-set generator for polynomials of degree d ≤ n^{.99} that vanish on at most an ε = |𝔽|^{-t} fraction of their inputs with seed length O((d-t) · log(n)). We also show a polynomial-time computable hitting-set generator with seed length O((d-t) · (2^{d-t} + log(n))).
    In addition, we prove that the problem we study is closely related to the following question: "Does there exist a small set S ⊆ 𝔽^n whose degree-d closure is very large?", where the degree-d closure of S is the variety induced by the set of degree-d polynomials that vanish on S.
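
    For concreteness, one standard way to write the degree-d closure mentioned at the end of the abstract (our phrasing, not necessarily the paper's exact notation) is:

```latex
\[
  \mathrm{cl}_d(S) \;=\; \bigl\{\, x \in \mathbb{F}^n \;:\; p(x) = 0
  \ \text{for every polynomial } p \text{ of degree at most } d
  \ \text{with } p|_S \equiv 0 \,\bigr\}.
\]
```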

    Spectral Sparsification via Bounded-Independence Sampling

    We give a deterministic, nearly logarithmic-space algorithm for mild spectral sparsification of undirected graphs. Given a weighted, undirected graph G on n vertices described by a binary string of length N, an integer k ≤ log n, and an error parameter ε > 0, our algorithm runs in space Õ(k log(N · w_max/w_min)), where w_max and w_min are the maximum and minimum edge weights in G, and produces a weighted graph H with Õ(n^{1+2/k}/ε^2) edges that spectrally approximates G, in the sense of Spielman and Teng [ST04], up to an error of ε. Our algorithm is based on a new bounded-independence analysis of Spielman and Srivastava's effective-resistance-based edge sampling algorithm [SS08] and uses results from recent work on space-bounded Laplacian solvers [MRSV17]. In particular, we demonstrate an inherent tradeoff (via upper and lower bounds) between the amount of (bounded) independence used in the edge sampling algorithm, denoted by k above, and the resulting sparsity that can be achieved.
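
    The randomized procedure being derandomized here is the effective-resistance sampling of Spielman and Srivastava [SS08]. The dense, fully independent sketch below illustrates that baseline only (the constant in the sample count is illustrative); it is not the bounded-independence, space-efficient version constructed in the paper.

```python
import numpy as np

def resistance_sparsify(edges, weights, n, eps, rng=None):
    """Spielman--Srivastava-style sparsification sketch: sample edges with
    probability proportional to (weight x effective resistance) and
    reweight so the Laplacian is preserved in expectation."""
    rng = rng or np.random.default_rng()
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)                          # dense pseudo-inverse, illustration only

    # Effective resistance of (u, v) is (e_u - e_v)^T L^+ (e_u - e_v).
    scores = np.array([w * (Lp[u, u] + Lp[v, v] - 2 * Lp[u, v])
                       for (u, v), w in zip(edges, weights)])
    probs = scores / scores.sum()

    q = int(np.ceil(8 * n * np.log(n) / eps ** 2))  # sample count, up to constants
    H = {}
    for idx in rng.choice(len(edges), size=q, p=probs):
        H[edges[idx]] = H.get(edges[idx], 0.0) + weights[idx] / (q * probs[idx])
    return H
```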

    Near-Optimal Erasure List-Decodable Codes


    Pseudorandom Generators for Read-Once Monotone Branching Programs


    Error Reduction for Weighted PRGs Against Read Once Branching Programs
